4 research outputs found

    Direct Nonparametric Predictive Inference Classification Trees

    Classification is the task of assigning a new instance to one of a set of predefined categories based on the attributes of the instance. A classification tree is one of the most commonly used techniques in the area of classification. In recent years, many statistical methodologies have been developed to make inferences using imprecise probability theory, one of which is nonparametric predictive inference (NPI). NPI has been developed for different types of data and has been successfully applied in several fields, including classification. Because classification is explicitly predictive, the predictive nature of NPI makes it well suited to the task. In this thesis, we introduce a novel classification tree algorithm which we call the Direct Nonparametric Predictive Inference (D-NPI) classification algorithm. The D-NPI algorithm is based entirely on the NPI approach and does not rely on any additional assumptions. As a first step in developing the D-NPI classification method, we restrict our focus to binary and multinomial data types. The D-NPI algorithm uses a new split criterion called Correct Indication (CI), which is based entirely on NPI and does not use additional concepts such as entropy. The CI reflects how informative the attribute variables are: a very informative attribute variable yields high NPI lower and upper probabilities for CI. In addition, the CI reports the strength of the evidence that the attribute variables provide about the possible class state of future instances, based on the data. The performance of the D-NPI classification algorithm is compared against several classification algorithms from the literature, including some imprecise probability algorithms, using different evaluation measures. The experimental results indicate that the D-NPI classification algorithm performs well and tends to slightly outperform the other classification algorithms. Finally, a study of the D-NPI classification tree algorithm with noisy data is presented. Noisy data are data that contain incorrect values for the attribute variables or the class variable. The performance of the D-NPI classification tree algorithm with noisy data is studied and compared to other classification tree algorithms when different levels of random noise are added to the class variable or to the attribute variables. The results indicate that the D-NPI classification algorithm performs well with class noise and slightly outperforms other classification algorithms, while no single classification algorithm is the best performer with attribute noise.
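    For binary data, NPI gives closed-form lower and upper probabilities for the next observation: with s successes in n observations, the next observation is a success with lower probability s/(n+1) and upper probability (s+1)/(n+1). The sketch below uses these bounds in a toy split-selection routine that scores each attribute by the weighted NPI lower probability of indicating the locally most frequent class. This is an illustrative stand-in for the thesis's Correct Indication criterion, not its exact definition, and the data and function names are invented for the example.

```python
import numpy as np

def npi_bounds(successes, n):
    """NPI lower/upper probability that the next binary observation is a
    'success', given `successes` successes in n observations."""
    return successes / (n + 1), (successes + 1) / (n + 1)

def ci_split_score(attribute, labels):
    """Toy 'correct indication'-style score for a categorical attribute:
    for each attribute value, take the NPI lower/upper probability that the
    next instance belongs to the locally most frequent class, weighted by
    the fraction of instances having that value.  (Illustrative stand-in
    for the thesis's CI criterion, not its exact definition.)"""
    attribute = np.asarray(attribute)
    labels = np.asarray(labels)
    lower_score = upper_score = 0.0
    for value in np.unique(attribute):
        mask = attribute == value
        n = mask.sum()
        majority_count = np.bincount(labels[mask]).max()
        lo, up = npi_bounds(majority_count, n)
        weight = n / len(labels)
        lower_score += weight * lo
        upper_score += weight * up
    return lower_score, upper_score

# Example: pick the attribute with the highest NPI lower score as the split.
X = {"outlook": [0, 0, 1, 1, 2, 2, 2, 0],
     "windy":   [0, 1, 0, 1, 0, 1, 0, 1]}
y = [1, 0, 1, 1, 1, 0, 1, 0]
scores = {name: ci_split_score(col, y) for name, col in X.items()}
best = max(scores, key=lambda a: scores[a][0])
print(scores, "-> split on:", best)
```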

    A Generalized Residual-Based Test for Fractional Cointegration in Panel Data with Fixed Effects

    Asymptotic theories for fractional cointegration have been extensively studied in the context of time series data, and numerous empirical studies and tests have been developed. However, most existing testing procedures for fractional cointegration are designed primarily for time series data. This paper proposes a generalized residual-based test for fractionally cointegrated panels with fixed effects. The test is developed for a bivariate panel series in which the regressor is assumed to be fixed across cross-sectional units. The proposed test accommodates any integration order in [0, 1] and is asymptotically normal under the null hypothesis. Monte Carlo experiments demonstrate that the test exhibits better size and power than a comparable residual-based test across varying sample sizes.
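    As a rough illustration of a residual-based approach of this kind (not the paper's statistic), the sketch below within-demeans each unit to remove fixed effects, runs unit-by-unit OLS, estimates the memory parameter d of each residual series with a log-periodogram (GPH) regression, and compares the cross-sectional average to a hypothesised d0 using a normal approximation with the standard GPH variance π²/(24m). The bandwidth choice, the pooling across units, and the normal approximation are illustrative assumptions.

```python
import numpy as np

def gph_d(u, m=None):
    """Log-periodogram (GPH) estimate of the memory parameter d of a series u.
    m is the number of Fourier frequencies used (bandwidth); default is sqrt(T)."""
    u = np.asarray(u, dtype=float)
    T = len(u)
    m = int(np.sqrt(T)) if m is None else m
    lam = 2 * np.pi * np.arange(1, m + 1) / T
    # periodogram at the first m Fourier frequencies
    dft = np.array([np.sum(u * np.exp(-1j * w * np.arange(T))) for w in lam])
    I = np.abs(dft) ** 2 / (2 * np.pi * T)
    x = -2 * np.log(2 * np.sin(lam / 2))          # GPH regressor, slope = d
    x_c = x - x.mean()
    d_hat = np.sum(x_c * np.log(I)) / np.sum(x_c ** 2)
    return d_hat, m

def panel_residual_test(Y, X, d0=1.0):
    """Illustrative residual-based check for a panel with fixed effects:
    within-demean each unit, run unit-by-unit OLS, estimate d of each residual
    series by GPH, and compare the cross-sectional average to d0 using the
    normal approximation var(d_hat) ~ pi^2 / (24 m)."""
    d_hats, ms = [], []
    for y_i, x_i in zip(Y, X):
        y_c, x_c = y_i - y_i.mean(), x_i - x_i.mean()   # remove fixed effect
        beta = np.dot(x_c, y_c) / np.dot(x_c, x_c)
        d_hat, m = gph_d(y_c - beta * x_c)
        d_hats.append(d_hat)
        ms.append(m)
    N = len(d_hats)
    se = np.pi / np.sqrt(24 * np.mean(ms)) / np.sqrt(N)
    z = (np.mean(d_hats) - d0) / se
    return np.mean(d_hats), z

# Toy example: white-noise residuals, so the estimated d should be near 0.
rng = np.random.default_rng(0)
Y = [rng.standard_normal(200) for _ in range(10)]
X = [rng.standard_normal(200) for _ in range(10)]
d_bar, z = panel_residual_test(Y, X, d0=0.0)
print(f"average d_hat = {d_bar:.3f}, z = {z:.2f}")
```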

    Kibria–Lukman-Type Estimator for Regularization and Variable Selection with Application to Cancer Data

    Following the idea behind the elastic-net and Liu-LASSO estimators, we proposed a new penalized estimator based on the Kibria–Lukman estimator with an L1 norm to perform both regularization and variable selection. We defined the coordinate descent algorithm for the new estimator and compared its performance with that of several existing machine learning techniques, such as the least absolute shrinkage and selection operator (LASSO), the elastic-net, Liu-LASSO, the GO estimator and the ridge estimator, through simulation studies and real-life applications, in terms of test mean squared error (TMSE), coefficient mean squared error (βMSE), false-positive (FP) coefficients and false-negative (FN) coefficients. Our results revealed that the new penalized estimator performs well for both the simulated low- and high-dimensional data. The two real-life applications also show that the new method predicts the target variable better than the existing ones in terms of the test RMSE metric.
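    As an illustration of the kind of coordinate descent involved, the sketch below assumes the penalized objective (1/2)||y − Xβ||² + (k/2)||β + β̂_OLS||² + λ||β||₁, whose quadratic part reproduces the Kibria–Lukman estimator when λ = 0; the paper's exact objective, tuning-parameter choices, and standardization may differ. Each coordinate update reduces to a soft-thresholding step.

```python
import numpy as np

def soft_threshold(z, gamma):
    """Soft-thresholding operator S(z, gamma) = sign(z) * max(|z| - gamma, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def kl_lasso_cd(X, y, lam=0.1, k=0.1, n_iter=200, tol=1e-8):
    """Coordinate descent for the illustrative objective
        (1/2)||y - X b||^2 + (k/2)||b + b_ols||^2 + lam * ||b||_1,
    whose quadratic part gives the Kibria-Lukman estimator when lam = 0.
    (Hypothetical reading of the penalized estimator; lam and k are tuning values.)"""
    n, p = X.shape
    b_ols = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS anchor used by the KL term
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)                  # x_j'x_j for each column
    for _ in range(n_iter):
        beta_old = beta.copy()
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]           # partial residual
            z = X[:, j] @ r_j - k * b_ols[j]
            beta[j] = soft_threshold(z, lam) / (col_ss[j] + k)
        if np.max(np.abs(beta - beta_old)) < tol:
            break
    return beta

# Small sparse example: only the first two coefficients are nonzero.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 10))
y = X[:, 0] * 2.0 - X[:, 1] * 1.5 + 0.1 * rng.standard_normal(100)
print(np.round(kl_lasso_cd(X, y, lam=5.0, k=1.0), 2))
```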

    BLOOD TRANSFUSION REACTION IN EMERGENCY DEPARTMENT

    Early detection, rapid cessation of the transfusion, early consultation with the hematology and intensive care unit (ICU) departments, and fluid resuscitation are all necessary for the initial management of blood transfusion reactions. Because blood transfusions can result in major adverse consequences, it is crucial that doctors stay current with the literature and are knowledgeable about the pathophysiology, early therapy, and hazards of each type of transfusion reaction. Immune-mediated transfusion reactions frequently result from a mismatch or incompatibility between the recipient and the transfused product. The patient's respiratory rate, blood pressure, temperature, and pulse rate must be monitored, and abnormal clinical features, such as fever, rashes, or angioedema, should also be evaluated continually.